Inverse-Reference Priors for Fisher Regularization of Bayesian Neural Networks

Authors

Abstract

Recent studies have shown that the generalization ability of deep neural networks (DNNs) is closely related to the Fisher information matrix (FIM) calculated during the early training phase. Several methods have been proposed to regularize the FIM for increased generalization of DNNs. However, they cannot be used directly for Bayesian neural networks (BNNs), because the variable characteristics of BNN parameters make it difficult to calculate the FIM. To address this problem, we achieve regularization of the FIM of BNNs by specifying a new suitable prior distribution called the inverse-reference (IR) prior. To regularize the FIM, the IR prior is derived as the inverse of the reference prior, which imposes minimal prior knowledge on the parameters and maximizes the trace of the FIM. We demonstrate that the IR prior can enhance the generalization ability of BNNs for large-scale data over previously used priors, while providing adequate uncertainty quantification, using various benchmark image datasets and BNN structures.
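As a rough sketch of the construction (this reading relies on the classical asymptotic identification of the reference prior with the Jeffreys prior in regular models; the paper's exact derivation for BNNs may differ), the two priors can be related as

$$\pi_{\mathrm{ref}}(\theta) \;\propto\; \det I(\theta)^{1/2}, \qquad \pi_{\mathrm{IR}}(\theta) \;\propto\; \pi_{\mathrm{ref}}(\theta)^{-1} \;\propto\; \det I(\theta)^{-1/2},$$

where $I(\theta)$ is the FIM. The log of the IR prior then enters the training objective as a penalty of the form $-\tfrac{1}{2}\log\det I(\theta)$, discouraging large Fisher information during training, whereas the reference prior favors exactly the opposite.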


Similar articles

Bayesian Regularization in Constructive Neural Networks

In this paper, we study the incorporation of Bayesian regularization into constructive neural networks. The degree of regularization is automatically controlled in the Bayesian inference framework and hence does not require manual setting. Simulation shows that regularization, with input training using a full Bayesian approach, produces networks with better generalization performance and low...
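As an illustration of how a Bayesian framework can set the degree of regularization automatically, here is a minimal sketch in the spirit of MacKay's evidence approximation for a linear-Gaussian model (not the constructive-network procedure of the paper above; all names and initial values are assumptions):

import numpy as np

# Minimal sketch of MacKay-style evidence approximation: the
# weight-decay strength alpha and the noise precision beta are
# re-estimated from the data instead of being tuned by hand.
def evidence_weight_decay(X, y, n_iter=50):
    n, d = X.shape
    alpha, beta = 1.0, 1.0  # initial prior/noise precisions (assumed)
    for _ in range(n_iter):
        # Posterior over weights: N(m, S), S^{-1} = alpha*I + beta*X^T X
        S = np.linalg.inv(alpha * np.eye(d) + beta * X.T @ X)
        m = beta * S @ X.T @ y
        # gamma = effective number of well-determined parameters
        eigvals = np.linalg.eigvalsh(beta * X.T @ X)
        gamma = np.sum(eigvals / (alpha + eigvals))
        # Evidence-maximizing re-estimates of the two precisions
        alpha = gamma / (m @ m)
        beta = (n - gamma) / np.sum((y - X @ m) ** 2)
    return m, alpha, beta

Calling m, alpha, beta = evidence_weight_decay(X, y) leaves no regularization constant to set manually, which is the property the abstract highlights.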


Invariance priors for Bayesian feed-forward neural networks

Neural networks (NN) are famous for their advantageous flexibility for problems where there is insufficient knowledge to set up a proper model. On the other hand, this flexibility can cause overfitting and can hamper the generalization of neural networks. Many approaches to regularizing NN have been suggested, but most of them are based on ad hoc arguments. Employing the principle of transformati...


Regularization for Neural Networks

Research into regularization techniques is motivated by the tendency of neural networks to learn the specifics of the dataset they were trained on rather than learning general features that are applicable to unseen data. This is known as overfitting. The goal of any supervised machine learning task is to approximate a function that maps inputs to outputs, given a dataset of examples and labels....
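For concreteness, most techniques such a survey covers fit a single template (the notation here is generic, not taken from the paper itself): minimize the empirical loss plus a penalty term,

$$\min_{w}\; \frac{1}{n}\sum_{i=1}^{n} \ell\big(f_w(x_i),\, y_i\big) \;+\; \lambda\,\Omega(w),$$

where $\Omega(w) = \|w\|_2^2$ gives weight decay and $\lambda$ trades data fit against model simplicity; dropout, early stopping, and data augmentation are often viewed as implicit versions of the same trade-off.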


Model Selection in Bayesian Neural Networks via Horseshoe Priors

Bayesian Neural Networks (BNNs) have recently received increasing attention for their ability to provide well-calibrated posterior uncertainties. However, model selection—even choosing the number of nodes—remains an open question. In this work, we apply a horseshoe prior over node preactivations of a Bayesian neural network, which effectively turns off nodes that do not help explain the data. W...
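The horseshoe prior itself has a standard form (Carvalho, Polson, and Scott 2010); the node-level placement below only sketches what the abstract describes, and the global scale $\tau_0$ is a hyperparameter of my notation, not the paper's:

$$w_{jk} \mid \lambda_k, \tau \;\sim\; \mathcal{N}\!\big(0,\, \lambda_k^2 \tau^2\big), \qquad \lambda_k \sim \mathrm{C}^{+}(0,1), \qquad \tau \sim \mathrm{C}^{+}(0,\tau_0),$$

where all weights $w_{jk}$ feeding node $k$ share one half-Cauchy local scale $\lambda_k$, so the posterior can shrink an entire node toward zero while the heavy tails leave useful nodes essentially unshrunk.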


Likelihoods and Parameter Priors for Bayesian Networks

We develop simple methods for constructing likelihoods and parameter priors for learning about the parameters and structure of a Bayesian network. In particular, we introduce several assumptions that permit the construction of likelihoods and parameter priors for a large number of Bayesian-network structures from a small set of assessments. The most notable assumption is that of likelihood equi...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i7.25997